AI, Climate, and Transparency: Operationalizing and Improving the AI Act
Alder, Nicolas, Ebert, Kai, Herbrich, Ralf, Hacker, Philipp
This paper critically examines the AI Act's provisions on climate-related transparency, highlighting significant gaps and challenges in its implementation. We identify key shortcomings, including the exclusion of energy consumption during AI inference, the lack of coverage for indirect greenhouse gas emissions from AI applications, and the absence of a standard reporting methodology. The paper proposes a novel interpretation to bring inference-related energy use back within the Act's scope and advocates for public access to climate-related disclosures to foster market accountability and public scrutiny. Cumulative server-level energy reporting is recommended as the most suitable method. We also suggest broader policy changes, including sustainability risk assessments and renewable energy targets, to better address AI's environmental impact.
Black-Boxed Politics:
As numerous authors have documented, the idea of creating artificial, intelligent machines has entranced and scandalized people for millennia. Indeed, part of what makes the history of 'artificial intelligence' so fascinating is the mix of genuine scientific achievement with myth-making and outright deception. A certain amount of hype and myth-making can be harmless, and might even help to fuel real progress in the field. However, the fact that 'AI systems' are now being integrated into essential public services and other high-risk processes means that we must be especially vigilant about combating misconceptions about AI. At various points throughout 2019, we saw users of Amazon's Alexa, Google's Assistant, and Apple's Siri shocked to discover that recordings of their private family conversations were being reviewed by real, living humans. This was hardly surprising to anyone familiar with how these voice assistants are trained. But to the majority of customers, who do not question the presentation of these systems as 100% automated, it came as a shock that poorly paid overseas workers had access to what were often intimate and sensitive conversations.
- Europe > United Kingdom (0.28)
- North America > United States > Virginia (0.04)
- Law (1.00)
- Health & Medicine (1.00)
- Social Sector (0.94)
- Government (0.94)
What Facebook's public scrutiny can teach us about AI in health care
The question caught Zuckerberg off guard. "No," Zuckerberg eventually answered, after thinking about it for some time. As this question illustrates, the recent firestorm over Facebook's involvement in the Cambridge Analytica scandal is less about the data breach and more about the right to privacy and the limits of that right. It also highlights the ethical conflicts of implementing artificial intelligence without thoroughly considering societal norms. This scandal holds important bioethics lessons for health care leaders who are building machine learning and artificial intelligence models for clinical decision-making.
- Information Technology > Services (0.87)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.51)
Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods
Automated decisions are increasingly part of everyday life, but how can the public scrutinize, understand, and govern them? To begin to explore this, Omidyar Network has, in partnership with Upturn, published Public Scrutiny of Automated Decisions: Early Lessons and Emerging Methods. The report is based on an extensive review of computer and social science literature, a broad array of real-world attempts to study automated systems, and dozens of conversations with global digital rights advocates, regulators, technologists, and industry representatives. It maps out the landscape of public scrutiny of automated decision-making, both in terms of what civil society was or was not doing in this nascent sector and what laws and regulations were or were not in place to help regulate it.